Despite their widespread adoption, neural conversation models have yet to exhibit natural chat capabilities with humans. In this research, we treat user utterances as causes and generated responses as effects, recognizing that a change in a cause should produce a different effect. To explore this concept further, we compiled and expanded a new dataset, CausalDialogue, through crowd-sourcing. The dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure. Our analysis reveals that traditional loss functions struggle to incorporate the DAG structure effectively, leading us to propose a causality-enhanced method, Exponential Maximum Average Treatment Effect (ExMATE), to strengthen the impact of utterance-level causality when training neural conversation models. To evaluate this approach, we built a comprehensive benchmark on the CausalDialogue dataset using large-scale pre-trained language models and assessed the results with both human and automatic evaluation metrics for coherence, diversity, and agility. Our findings show that current techniques still fail to adequately address conversational DAGs, and that ExMATE improves the diversity and agility of conventional loss functions while maintaining coherence.
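A hedged aside for intuition: the abstract does not spell out the ExMATE objective, so the snippet below is only a rough, hypothetical sketch of a causality-enhanced utterance-level loss, under the assumption that it contrasts a response's likelihood given its true cause (its parent utterance in the dialogue DAG) with its likelihood given a counterfactual cause from another branch. The paper's exact formulation may differ.

```python
import torch

def causality_enhanced_loss(logp_true_cause, logp_counterfactual, beta=1.0):
    """Rough sketch of an ExMATE-style objective (hypothetical form).

    logp_true_cause:     log p(response | true cause utterance), shape (batch,)
    logp_counterfactual: log p(response | counterfactual cause), shape (batch,)

    The MLE term keeps responses fluent and coherent; the exponential term
    penalizes a small (or negative) likelihood gap between the true and the
    counterfactual cause, i.e. it rewards a large utterance-level treatment effect.
    """
    mle = -logp_true_cause.mean()
    treatment_effect = logp_true_cause - logp_counterfactual
    causal_penalty = torch.exp(-beta * treatment_effect).mean()
    return mle + causal_penalty
```

In such a setup the counterfactual cause can be drawn from a sibling branch of the same DAG node, so that only the cause utterance changes while the preceding dialogue history stays fixed.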
Is it possible to leverage large-scale raw corpora and raw parallel corpora to build a general learned metric? Existing learned metrics either have gaps to human judgements, are model-dependent, or are limited to the domains or tasks where human ratings are available. In this paper, we propose SEScore2, a model-based metric pretrained on a million-scale synthetic dataset constructed by our novel retrieval-augmented data synthesis pipeline. SEScore2 achieves high correlation with human judgements without any human-rating supervision. Importantly, our unsupervised SEScore2 can outperform supervised metrics, which are trained on News-domain human ratings, in the TED domain. We evaluate SEScore2 on four text generation tasks across three languages. SEScore2 outperforms all prior unsupervised evaluation metrics in machine translation, speech translation, data-to-text, and dialogue generation, with an average Kendall improvement of 0.158. SEScore2 even outperforms the SOTA supervised metric BLEURT on data-to-text, dialogue generation, and overall correlation.
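Since the abstract reports gains in Kendall correlation, here is a generic illustration (toy numbers, not the paper's data) of how segment-level Kendall tau between a metric's scores and human ratings is usually computed; this is standard evaluation code, not part of SEScore2 itself.

```python
from scipy.stats import kendalltau

def metric_kendall(metric_scores, human_ratings):
    """Kendall tau between a learned metric's segment-level scores and human
    ratings; a higher tau means the metric ranks outputs more like humans do."""
    tau, _p_value = kendalltau(metric_scores, human_ratings)
    return tau

# Toy numbers for illustration only (not results from the paper).
print(metric_kendall([0.21, 0.55, 0.90, 0.43], [1, 3, 5, 2]))
```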
Vision-and-language navigation (VLN) requires an agent to follow natural language instructions to reach a specific goal. The large discrepancy between seen and unseen environments makes it challenging for the agent to generalize well. Previous studies have proposed data augmentation methods to explicitly or implicitly mitigate data bias and have shown improvements in generalization. However, they tend to memorize the augmented trajectories and ignore the distribution shift in unseen environments at test time. In this paper, we propose an Unseen Discrepancy Anticipating Vision and Language Navigation approach (DAVIS), which generalizes to unseen environments by encouraging test-time visual consistency. Specifically, we design: 1) a semi-supervised framework, DAVIS, that leverages visual consistency signals across observations with similar semantics; and 2) a two-stage learning procedure that encourages adaptation to the test-time distribution. The framework enhances a basic mixture of imitation and reinforcement learning with momentum contrast, encouraging stable decision-making on similar observations during both the joint training stage and the test-time adaptation stage. Extensive experiments show that DAVIS achieves model-agnostic improvements over previous state-of-the-art VLN baselines on the R2R and RxR benchmarks. Our source code and data are in the supplemental material.
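For readers unfamiliar with momentum contrast, the sketch below shows the two generic ingredients the abstract alludes to: a momentum-updated key encoder and an InfoNCE-style consistency term over semantically similar observations. How DAVIS defines similar observations and weights this term is left to the paper; the code is only illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """MoCo-style update: the key encoder slowly tracks the query encoder."""
    for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

def visual_consistency_loss(obs_features, similar_obs_features, temperature=0.07):
    """InfoNCE-style term pulling features of semantically similar observations
    together; the other samples in the batch act as negatives (illustrative)."""
    q = F.normalize(obs_features, dim=-1)          # (B, D)
    k = F.normalize(similar_obs_features, dim=-1)  # (B, D)
    logits = q @ k.t() / temperature               # (B, B); diagonal = positive pairs
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```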
Federated learning (FL) provides an effective paradigm for training machine learning models with privacy protection. However, recent studies have shown that FL is subject to various security, privacy, and fairness threats due to potentially malicious and heterogeneous local agents. For instance, it is vulnerable to local adversarial agents who contribute only low-quality data, with the goal of harming the performance of those with high-quality data. Such attacks therefore break existing definitions of fairness in FL, which mainly focus on some notion of performance parity. In this work, we aim to address this limitation and propose a formal definition of fairness via agent-awareness for FL (FAA), which takes the heterogeneous data contributions of local agents into account. In addition, we propose a fair FL training algorithm based on agent clustering (FOCUS) to achieve FAA. Theoretically, we prove the convergence and optimality of FOCUS under mild conditions for linear models and for general convex loss functions with bounded smoothness. We also prove that FOCUS always achieves higher fairness, as measured by FAA, than the standard FedAvg protocol under both linear models and general convex loss functions. Empirically, we evaluate FOCUS on four datasets, including synthetic data, images, and text, under different settings, and show that FOCUS achieves significantly higher fairness measured by FAA than FedAvg while maintaining similar or even higher prediction accuracy.
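As a minimal illustration of the clustering idea only (the actual clustering criterion and update rule are defined in the paper), the sketch below groups client updates by an assumed cluster assignment and performs a FedAvg-style weighted average within each cluster rather than one global average:

```python
from collections import defaultdict

def cluster_federated_average(client_updates, client_weights, cluster_ids):
    """Minimal sketch: aggregate client model updates within each cluster
    (weighted by sample count) instead of one global FedAvg over all clients.

    client_updates: list of dicts, parameter name -> list of floats
    client_weights: list of per-client sample counts
    cluster_ids:    list assigning each client to a cluster
    Returns a dict: cluster id -> averaged update for that cluster.
    """
    groups = defaultdict(list)
    for update, weight, cid in zip(client_updates, client_weights, cluster_ids):
        groups[cid].append((update, weight))

    cluster_models = {}
    for cid, members in groups.items():
        total = sum(w for _, w in members)
        averaged = {}
        for name in members[0][0]:
            averaged[name] = [
                sum(w * update[name][i] for update, w in members) / total
                for i in range(len(members[0][0][name]))
            ]
        cluster_models[cid] = averaged
    return cluster_models
```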
Dominant trackers generate a fixed-size rectangular region, based on the previous prediction or the initial bounding box, as the model input (i.e., the search region). Although this manner improves tracking efficiency, a fixed-size search region lacks flexibility and is likely to fail in cases such as fast motion and distractor interference: the tracker tends to lose the target object because the search region is too small, or to be distracted because the search region is too large. In this work, we propose a novel tracking paradigm, called Search Region Regulation Tracking (SRRT), which applies the proposed search region regulator to dynamically estimate an optimal search region for each frame. To adapt to the object's appearance variation during tracking, we further propose a locking-state determined updating strategy for reference frame updating. Our SRRT framework is quite concise, without fancy design, yet obtains evident improvements over the baselines and competitive results against other state-of-the-art trackers on seven challenging benchmarks. On the large-scale LaSOT benchmark, SRRT improves SiamRPN++ and TransT with absolute gains of 4.6% and 3.1%, respectively.
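To make the contrast with a fixed-size search region concrete, here is a hypothetical sketch (not the paper's actual regulator, which is a learned module) of how a per-frame scale prediction could replace a hard-coded crop factor:

```python
def regulate_search_region(prev_box, predicted_scale, min_scale=2.0, max_scale=6.0):
    """Expand the previous target box by a per-frame scale factor instead of a
    fixed factor (e.g. always 4x). prev_box is (cx, cy, w, h); predicted_scale
    would come from a search region regulator. Bounds here are illustrative."""
    cx, cy, w, h = prev_box
    s = min(max(predicted_scale, min_scale), max_scale)
    return (cx, cy, w * s, h * s)

# Example: a fast-moving target gets a larger crop than a slow-moving one.
print(regulate_search_region((320.0, 240.0, 60.0, 80.0), predicted_scale=5.2))
```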
Language planning aims to decompose a complex high-level goal into simpler low-level steps. Such procedural reasoning ability is essential for applications such as household robots and virtual assistants. Although language planning is a basic skill of humans in daily life, it remains a challenge for large language models (LLMs), which lack deep-level commonsense knowledge about the real world. Previous methods require either manual exemplars or annotated programs to acquire such ability from LLMs. In contrast, this paper proposes a neuro-symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from LLMs with commonsense-infused prompting. Pre-trained knowledge in LLMs is essentially an unobserved confounder that causes spurious correlations between tasks and action plans. Through the lens of a structural causal model (SCM), we propose an effective strategy to construct prompts as a causal intervention on the SCM. Using graph sampling techniques and symbolic program executors, our strategy formally constructs structured causal prompts from commonsense knowledge bases. CLAP obtains state-of-the-art performance on WikiHow and RobotHow, with a relative improvement of 5.28% in human evaluation under the counterfactual setting. This indicates the superiority of CLAP in causal language planning, both semantically and sequentially.
Expert-layman text style transfer technologies have the potential to improve communication between members of scientific communities and the general public. High-quality information produced by experts is often filled with difficult jargon that laypeople struggle to understand. This is a particularly notable problem in the medical domain, where laypeople are often confused by medical texts online. At present, two bottlenecks stand in the way of building high-quality medical expert-layman style transfer systems: a dearth of pretrained medical-domain language models spanning both expert and layman terminologies, and a lack of parallel corpora for training the transfer task itself. To mitigate the first issue, we propose a novel language model (LM) pretraining task, Knowledge Base Assimilation, which synthesizes pretraining data from the edges of a graph of expert- and layman-style medical terminology into the LM during self-supervised learning. To mitigate the second issue, we build a large-scale parallel corpus in the medical expert-layman domain using a margin-based criterion. Our experiments show that transformer-based models pretrained with Knowledge Base Assimilation and other well-established pretraining tasks, then fine-tuned on our new parallel corpus, yield considerable improvements on expert-layman transfer benchmarks, reaching an average relative improvement of 106% in the Overall Success Rate (OSR) of our human evaluation. We release our code and parallel corpus for future research.
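The corpus construction relies on a margin-based criterion for mining expert-layman sentence pairs; one widely used form is the ratio margin over sentence embeddings sketched below. Whether the paper uses exactly this variant is an assumption; the snippet is only meant to illustrate what a margin-based criterion looks like.

```python
import numpy as np

def ratio_margin_scores(src_embs, tgt_embs, k=4):
    """Ratio-margin scoring for sentence-pair mining (illustrative): cosine
    similarity divided by the average similarity to each side's k nearest
    neighbours, which down-weights 'hub' sentences that are close to everything.
    """
    src = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    sims = src @ tgt.T                                    # (n_src, n_tgt) cosine matrix
    knn_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # avg sim of each source's k-NN
    knn_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # avg sim of each target's k-NN
    return sims / (0.5 * (knn_src[:, None] + knn_tgt[None, :]))
```

Pairs whose margin score exceeds a chosen threshold are then kept as (pseudo-)parallel training data.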
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
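As a hedged sketch of the three modifications (class names, parameter values, and buffer policy are illustrative, not the paper's): the refiner is trained with an adversarial term plus a self-regularization term that keeps the refined image close to its synthetic input so the annotation stays valid, and the discriminator sees half of each batch from a buffer of previously refined images. If the discriminator outputs a map of per-patch logits, averaging the adversarial term over patches gives the local adversarial loss.

```python
import random
import torch
import torch.nn.functional as F

def refiner_loss(disc_logits_on_refined, refined, synthetic, lam=0.5):
    """Adversarial term (fool the discriminator) + L1 self-regularization that
    keeps the refined image close to the synthetic input, preserving its label.
    If disc_logits_on_refined is a per-patch logit map, averaging the BCE over
    patches acts as a local adversarial loss."""
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_refined, torch.ones_like(disc_logits_on_refined))
    self_reg = (refined - synthetic).abs().mean()
    return adv + lam * self_reg

class RefinedImageHistory:
    """Pool of previously refined images; half of each discriminator batch is
    drawn from the pool so the discriminator keeps penalizing older artifacts."""
    def __init__(self, capacity=512):
        self.capacity, self.pool = capacity, []

    def mix(self, refined_batch):
        half = refined_batch.shape[0] // 2
        if half > 0 and len(self.pool) >= half:
            old = torch.stack(random.sample(self.pool, half))
        else:
            old = refined_batch[:half]
        for img in refined_batch[:half].detach():
            if len(self.pool) < self.capacity:
                self.pool.append(img)
            else:
                self.pool[random.randrange(self.capacity)] = img
        return torch.cat([old, refined_batch[half:]], dim=0)
```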
In general, deep learning-based methods for infrared and visible image fusion rely on an unsupervised mechanism to retain vital information, using elaborately designed loss functions. However, such a mechanism depends on a well-designed loss function, which cannot guarantee that all vital information in the source images is sufficiently extracted. In this work, we propose a novel interactive feature embedding within a self-supervised learning framework for infrared and visible image fusion, aiming to overcome the problem of vital-information degradation. With the help of the self-supervised learning framework, hierarchical representations of the source images can be extracted efficiently. In particular, the interactive feature embedding models are carefully designed to bridge self-supervised learning and infrared-visible image fusion learning, achieving vital information retention. Qualitative and quantitative evaluations show that the proposed method performs favorably against state-of-the-art methods.
The formalization of existing mathematical proofs is a notoriously difficult process. Despite decades of research on automation and proof assistants, writing formal proofs remains arduous and only accessible to a few experts. While previous studies to automate formalization focused on powerful search algorithms, no attempts were made to take advantage of available informal proofs. In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems. We investigate two relevant setups where informal proofs are either written by humans or generated by a language model. Our experiments and ablation studies show that large language models are able to produce well-structured formal sketches that follow the same reasoning steps as the informal proofs. Guiding an automated prover with these sketches enhances its performance from 20.9% to 39.3% on a collection of mathematical competition problems.
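A minimal sketch of the overall draft-sketch-prove control flow, with every interface passed in as a callable because the paper's actual prompts, sketch parser, and prover interface are not reproduced here:

```python
from typing import Callable, List, Optional

def draft_sketch_prove(
    statement: str,
    draft: Callable[[str], str],                  # LLM call: informal proof from the statement
    sketch: Callable[[str, str], str],            # LLM call: formal sketch with open sub-goals
    open_subgoals: Callable[[str], List[str]],    # parser extracting the sketch's holes
    prove: Callable[[str], Optional[str]],        # automated prover for one sub-goal
    informal_proof: Optional[str] = None,
) -> Optional[str]:
    """Illustrative DSP-style pipeline; all callables are assumptions, not the
    paper's actual interfaces."""
    if informal_proof is None:
        informal_proof = draft(statement)              # Draft: write an informal proof
    formal_sketch = sketch(statement, informal_proof)  # Sketch: formal outline with holes
    closed = []
    for goal in open_subgoals(formal_sketch):          # Prove: discharge each hole
        step = prove(goal)
        if step is None:
            return None                                # attempt fails if any hole stays open
        closed.append(step)
    return formal_sketch + "\n" + "\n".join(closed)
```

In the paper's two setups, the draft step is either skipped (human-written informal proofs) or performed by the language model itself.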